Multi-agent cooperation and competition is characterized by real-time continuous actions, incomplete information, an enormous search space, multiple complex tasks, and spatio-temporal reasoning, making it one of the most challenging problems in current artificial intelligence research. To address the long training times of large-scale multi-agent reinforcement learning, this paper proposes an Actor-Critic-based cooperative confrontation framework that uses meta curriculum reinforcement learning to extract meta-models of basic tasks from small-scale scenarios. Guided by curriculum learning, we migrate these meta-models to large-scale scenarios and continue training from them, ultimately obtaining a stronger collaboration strategy. Simulation experiments are conducted on the StarCraft II platform. The results show that multi-agent cooperative confrontation based on meta curriculum reinforcement learning effectively accelerates the training process and achieves a higher win rate in less time than traditional training methods, increasing training speed by about 40%. The method thus effectively supports the efficient generation of multi-agent cooperative confrontation strategies.
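The two-stage pipeline described above (train a meta-model on a small-scale scenario, then migrate it to initialize training on a large-scale scenario) can be sketched as follows. This is a minimal illustration only: the `ActorCritic` class, the `copy_from` transfer rule, and the toy reward gradients are hypothetical stand-ins for the paper's actual Actor-Critic networks and StarCraft II environments.

```python
class ActorCritic:
    """Toy linear model standing in for an Actor-Critic agent (hypothetical)."""

    def __init__(self, n_features):
        self.w = [0.0] * n_features  # parameter vector, initialized to zero

    def copy_from(self, other):
        # Model migration: reuse the small-scale meta-model's weights for
        # the overlapping part of the parameter vector; extra large-scale
        # parameters keep their fresh initialization.
        for i in range(min(len(self.w), len(other.w))):
            self.w[i] = other.w[i]


def train(model, grad_fn, steps, lr=0.1):
    """Stub training loop: simple gradient steps stand in for RL updates."""
    for _ in range(steps):
        grad = grad_fn(model.w)
        model.w = [w + lr * g for w, g in zip(model.w, grad)]
    return model


# Stage 1 — small-scale scenario: extract the meta-model of a basic task.
small = ActorCritic(n_features=2)
target_small = [1.0, -1.0]  # toy optimum standing in for the learned policy
train(small, lambda w: [t - wi for t, wi in zip(target_small, w)], steps=50)

# Stage 2 — large-scale scenario: initialize from the meta-model, then
# continue training (curriculum-style fine-tuning instead of from scratch).
large = ActorCritic(n_features=4)
large.copy_from(small)  # meta-model migration
target_large = [1.0, -1.0, 0.5, 0.5]
train(large, lambda w: [t - wi for t, wi in zip(target_large, w)], steps=50)
```

Because the first two parameters start near their optimum after migration, the second stage converges faster than training the large model from a cold start, which is the intuition behind the reported speedup.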